We introduce MR-Net, a general architecture for multiresolution neural networks, as well as a framework for imaging applications built on this architecture. Our coordinate-based networks are continuous in both space and scale, since they are composed of multiple stages that progressively add finer details. Moreover, they provide a compact and efficient representation. We demonstrate multiresolution image representations, together with applications to texture magnification, minification, and antialiasing.
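As a rough illustration of the multistage idea, stages operating at increasing frequency bands can have their outputs summed, so that evaluating only the first stages yields a coarser approximation of the signal. The sketch below uses random sinusoidal features; the class names, frequency schedule, and random-feature design are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

class CoordinateStage:
    """One stage of a multiresolution coordinate network (illustrative).
    Random sinusoidal features at a given frequency scale followed by a
    linear layer; higher-frequency stages capture finer detail."""
    def __init__(self, freq, hidden=32, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        self.B = rng.normal(scale=freq, size=(2, hidden))  # 2-D coordinates
        self.w = rng.normal(scale=0.1, size=hidden)

    def __call__(self, coords):
        # coords: (n, 2) continuous positions in [0, 1]^2
        return np.sin(coords @ self.B) @ self.w

class MultiResolutionNet:
    """Sums stage outputs; evaluating only the first k stages yields a
    coarser (lower-frequency) approximation -- continuity in scale."""
    def __init__(self, freqs=(1.0, 4.0, 16.0)):
        rng = np.random.default_rng(42)
        self.stages = [CoordinateStage(f, rng=rng) for f in freqs]

    def __call__(self, coords, n_stages=None):
        return sum(s(coords) for s in self.stages[:n_stages])
```

Because the input coordinates are continuous, the same network can be sampled at any spatial resolution, and truncating the stage sum gives a band-limited version of the image.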
We introduce a neural implicit framework that exploits the differentiable properties of neural networks and the discrete geometry of point-sampled surfaces to approximate them as level sets of neural implicit functions. To train a neural implicit function, we propose a loss functional that approximates the signed distance function and allows terms with high-order derivatives, such as the alignment between the principal directions of curvature, to learn more geometric details. During training, we consider a non-uniform sampling strategy based on the curvatures of the point-sampled surface, in order to prioritize points with more geometric detail. Compared with previous approaches, this sampling implies faster learning while preserving geometric accuracy. We also present analytical differential-geometry formulas for neural surfaces, such as normal vectors and curvatures.
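A minimal sketch of the kind of loss functional described: the network should vanish on surface samples, its gradient should align with the surface normals, and its gradient should have unit norm off the surface (the eikonal property of signed distance functions). The function below assumes a generic Python callable for the network, uses finite differences instead of automatic differentiation, and omits the higher-order curvature-alignment terms; all names and weights are illustrative.

```python
import numpy as np

def sdf_loss(f, surface_pts, normals, off_pts, eps=1e-4,
             w_fit=1.0, w_normal=1.0, w_eik=0.1):
    """Illustrative signed-distance loss for a network f: (n,3) -> (n,).
    Terms: (i) f vanishes on surface samples, (ii) grad f aligns with
    surface normals, (iii) |grad f| = 1 off the surface (eikonal)."""
    def grad(pts):
        # central finite differences; a real implementation would use autodiff
        g = np.zeros_like(pts)
        for i in range(pts.shape[1]):
            d = np.zeros(pts.shape[1]); d[i] = eps
            g[:, i] = (f(pts + d) - f(pts - d)) / (2 * eps)
        return g

    fit = np.mean(f(surface_pts) ** 2)                    # zero level set
    gs = grad(surface_pts)
    normal_term = np.mean(np.sum((gs - normals) ** 2, axis=1))
    go = grad(off_pts)
    eik = np.mean((np.linalg.norm(go, axis=1) - 1.0) ** 2)
    return w_fit * fit + w_normal * normal_term + w_eik * eik
```

The curvature-based non-uniform sampling from the abstract would enter here by choosing `surface_pts` with probability proportional to local curvature rather than uniformly.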
The field of robotics, and more specifically humanoid robotics, has several established competitions with research-oriented goals in mind. By challenging the robots in a handful of tasks, these competitions provide a way to gauge the state of the art in robotic design, as well as an indicator of how far we are from reaching human performance. The most notable competitions are RoboCup, which has the long-term goal of competing against a real human team in 2050, and the FIRA HuroCup league, in which humanoid robots have to perform tasks based on actual Olympic events. Having robots compete against humans under the same rules is a challenging goal, and we believe that it is in the sport of archery that humanoid robots have the most potential to achieve it in the near future. In this work, we perform a first step in this direction. We present a humanoid robot that is capable of gripping, drawing and shooting a recurve bow at a target 10 meters away with considerable accuracy. Additionally, we show that it is also capable of shooting at distances of over 50 meters.
State-of-the-art brain tumor segmentation is based on deep learning models applied to multi-modal MRIs. Currently, these models are trained on images after a preprocessing stage that involves registration, interpolation, brain extraction (BE, also known as skull-stripping) and manual correction by an expert. However, for clinical practice, this last step is tedious and time-consuming and, therefore, not always feasible, resulting in skull-stripping faults that can negatively impact the tumor segmentation quality. Still, the extent of this impact has never been measured for any of the many different BE methods available. In this work, we propose an automatic brain tumor segmentation pipeline and evaluate its performance with multiple BE methods. Our experiments show that the choice of a BE method can compromise up to 15.7% of the tumor segmentation performance. Moreover, we propose training and testing tumor segmentation models on non-skull-stripped images, effectively discarding the BE step from the pipeline. Our results show that this approach leads to a competitive performance at a fraction of the time. We conclude that, in contrast to the current paradigm, training tumor segmentation models on non-skull-stripped images can be the best option when high performance in clinical practice is desired.
Bi-encoders and cross-encoders are widely used in many state-of-the-art retrieval pipelines. In this work we study the generalization ability of these two types of architectures across a wide range of parameter counts, in both in-domain and out-of-domain scenarios. We find that the number of parameters and the early query-document interactions of cross-encoders play a significant role in the generalization ability of retrieval models. Our experiments show that increasing model size results in marginal gains on in-domain test sets, but much larger gains on new domains never seen during fine-tuning. Furthermore, we show that cross-encoders largely outperform bi-encoders of similar size in several tasks. On the BEIR benchmark, our largest cross-encoder surpasses a state-of-the-art bi-encoder by more than 4 average points. Finally, we show that using bi-encoders as first-stage retrievers provides no gains over a simpler retriever such as BM25 on out-of-domain tasks. The code is available at https://github.com/guilhermemr04/scaling-zero-shot-retrieval.git
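The architectural contrast, late interaction in bi-encoders versus early query-document interaction in cross-encoders, can be sketched with toy one-hot token embeddings. Real models use learned contextual encoders; `bi_score`, `cross_score`, and the tiny vocabulary are illustrative stand-ins, not the paper's models.

```python
import numpy as np

VOCAB = {"apple": 0, "pie": 1, "recipe": 2, "iphone": 3, "review": 4}
EMB = np.eye(len(VOCAB))  # one-hot stand-ins for learned dense embeddings

def tok_embs(text):
    return EMB[[VOCAB[t] for t in text.split()]]

def bi_score(query, doc):
    """Bi-encoder: each side is encoded independently (mean pooling here)
    and the two representations meet only at the final dot product, which
    is why document vectors can be precomputed and indexed for
    first-stage retrieval."""
    return float(tok_embs(query).mean(axis=0) @ tok_embs(doc).mean(axis=0))

def cross_score(query, doc):
    """Cross-encoder stand-in: query and document tokens interact before
    aggregation (here via a token-similarity matrix), so the score must
    be recomputed for every query-document pair."""
    sim = tok_embs(query) @ tok_embs(doc).T  # early token-level interactions
    return float(sim.max(axis=1).sum())
```

The trade-off the paper measures follows from this structure: the bi-encoder's independence enables cheap retrieval, while the cross-encoder's pairwise interactions give it more capacity to generalize, at higher inference cost.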
Besides accuracy, recent studies on machine learning models have been addressing the question of how the obtained results can be interpreted. Indeed, while complex machine learning models can provide very good results in terms of accuracy even in challenging applications, it is difficult to interpret them. Aiming to provide some interpretability for such models, one of the most famous methods, SHAP, borrows the Shapley value concept from game theory in order to locally explain the predicted outcome of an instance of interest. Since calculating the SHAP values requires computations over all possible coalitions of attributes, their computational cost can be very high. Therefore, a SHAP-based method called Kernel SHAP adopts an efficient strategy that approximates such values with less computational effort. In this paper, we also address local interpretability in machine learning based on Shapley values. Firstly, we provide a straightforward formulation of a SHAP-based method for local interpretability by using the Choquet integral, which leads to both Shapley values and Shapley interaction indices. Moreover, we adopt the concept of $k$-additive games from game theory, which helps reduce the computational effort when estimating the SHAP values. The obtained results attest that our proposal requires fewer computations on coalitions of attributes to approximate the SHAP values.
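To see why approximations are needed at all, note that the exact Shapley value already requires enumerating every coalition of attributes. A minimal sketch for a generic coalition value function (the function name and interface are illustrative):

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(value, n_features):
    """Exact Shapley values of a coalition value function
    value(frozenset) -> float.  Enumerates all 2^n coalitions, which is
    exactly the cost that Kernel SHAP and k-additive games avoid."""
    players = range(n_features)
    phi = np.zeros(n_features)
    for i in players:
        others = [j for j in players if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = (factorial(len(S)) * factorial(n_features - len(S) - 1)
                     / factorial(n_features))
                phi[i] += w * (value(S | {i}) - value(S))
    return phi
```

For m attributes this loop visits all 2^m coalitions per feature; restricting the game to be k-additive, as the paper proposes, limits the interactions that must be modeled and hence the number of coalition evaluations needed to estimate the same values.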
In machine learning, the use of algorithm-agnostic approaches is an emerging area for explaining the contribution of individual features to the predicted outcome. While the focus has been on explaining the predictions themselves, little has been done to explain the robustness of these models, that is, how each feature contributes to achieving that robustness. In this paper, we propose using Shapley values to explain the contribution of each feature to model robustness, measured in terms of the Receiver Operating Characteristic (ROC) curve and the Area Under the ROC Curve (AUC). With the help of an illustrative example, we demonstrate the proposed idea of explaining the ROC curve and show how the uncertainty in these curves can be visualized. For imbalanced datasets, the use of Precision-Recall Curves (PRC) is considered more appropriate; therefore, we also demonstrate how to explain the PRC with the help of Shapley values.
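The idea of attributing a ranking metric such as AUC to individual features can be sketched as follows. The rank-sum AUC formula is standard; using the mean of a feature subset as the "model" is a deliberately simplistic stand-in for retraining or re-scoring on that subset, and the function names are illustrative.

```python
import numpy as np
from itertools import permutations

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def shapley_auc(X, y, n_features):
    """Shapley attribution of AUC to features.  value(S) scores samples
    with the mean of the features in S (a toy stand-in for a model
    trained on subset S); the empty coalition gets AUC = 0.5, i.e. a
    random ranking.  Averages marginal contributions over all feature
    orderings -- feasible only for small n_features."""
    def value(S):
        if not S:
            return 0.5
        return auc(X[:, list(S)].mean(axis=1), y)

    phi = np.zeros(n_features)
    perms = list(permutations(range(n_features)))
    for order in perms:
        S = []
        for i in order:
            before = value(S)
            S.append(i)
            phi[i] += value(S) - before
    return phi / len(perms)
```

By the efficiency property, the attributions sum to the full model's AUC minus 0.5, so each value reads directly as that feature's share of the lift over random ranking.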
The universal approximation theorem asserts that a single-hidden-layer neural network can approximate continuous functions on compact sets with any desired precision. As an existence result, the universal approximation theorem supports the use of neural networks in a variety of applications, including regression and classification tasks. The universal approximation theorem is not limited to real-valued neural networks; it also holds for complex-, quaternion-, tessarine-, and Clifford-valued neural networks. This paper extends the universal approximation theorem to a broad class of hypercomplex-valued neural networks. Precisely, we first introduce the concept of non-degenerate hypercomplex algebras. Complex numbers, quaternions, and tessarines are examples of non-degenerate hypercomplex algebras. Then, we state the universal approximation theorem for hypercomplex-valued neural networks defined on non-degenerate algebras.
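A hypercomplex algebra is determined by the products of its basis elements. The sketch below encodes such multiplication tables for the two examples named above; it assumes each basis product is a single signed basis element, which holds for quaternions and tessarines but not for every hypercomplex algebra, and it does not check the non-degeneracy condition itself. Names (`hyper_mult`, `QUAT`, `TESS`) are illustrative.

```python
import numpy as np

def hyper_mult(x, y, table):
    """Multiply two hypercomplex numbers (coefficient vectors over the
    basis) given the algebra's multiplication table:
    table[i][j] = (k, sign) encodes e_i * e_j = sign * e_k."""
    out = np.zeros(len(x))
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            k, s = table[i][j]
            out[k] += s * xi * yj
    return out

# Quaternions: basis 1, i, j, k with i^2 = j^2 = k^2 = ijk = -1
QUAT = [
    [(0, 1), (1, 1), (2, 1), (3, 1)],
    [(1, 1), (0, -1), (3, 1), (2, -1)],
    [(2, 1), (3, -1), (0, -1), (1, 1)],
    [(3, 1), (2, 1), (1, -1), (0, -1)],
]

# Tessarines: commutative; basis 1, i, j, k with i^2 = -1, j^2 = +1, ij = k
TESS = [
    [(0, 1), (1, 1), (2, 1), (3, 1)],
    [(1, 1), (0, -1), (3, 1), (2, -1)],
    [(2, 1), (3, 1), (0, 1), (1, 1)],
    [(3, 1), (2, -1), (1, 1), (0, -1)],
]
```

The tables make the structural difference concrete: quaternions give ij = k but ji = -k, while tessarines commute, and both are covered by the same universal approximation result.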
Principal Component Analysis (PCA), a ubiquitous dimensionality-reduction technique in signal processing, searches for a projection matrix that minimizes the squared error between the reduced dataset and the original one. Since classical PCA is not tailored to address fairness-related concerns, applying it to real-world problems may produce disparities in the reconstruction errors of different groups (e.g., men and women, whites and blacks), with potentially harmful consequences such as introducing bias against sensitive groups. Although several fair versions of PCA have been proposed recently, there remains a fundamental gap in the search for algorithms simple enough to be deployed in real systems. To address this issue, we propose a novel PCA algorithm that tackles fairness through a simple strategy consisting of a one-dimensional search that exploits the closed-form solution of PCA. As attested by numerical experiments, the proposal can significantly improve fairness with a very small loss in the overall reconstruction error, without resorting to complex optimization schemes. Moreover, our findings are consistent across several real-world scenarios and with both unbalanced and balanced datasets.
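A hedged sketch of how a one-dimensional search can trade overall error for fairness; this illustrates the general idea, not the paper's exact algorithm. A scalar t reweights the two groups' covariance matrices, the closed-form eigendecomposition is reused at each grid point, and the projection with the smallest gap between group reconstruction errors is kept.

```python
import numpy as np

def recon_error(X, W):
    """Mean squared reconstruction error of X projected onto columns of W."""
    R = X - X @ (W @ W.T)
    return float(np.mean(np.sum(R ** 2, axis=1)))

def fair_pca(X_a, X_b, n_comp, grid=101):
    """Illustrative fairness-aware PCA: sweep t in [0, 1] reweighting the
    two groups' covariances, solve PCA in closed form (eigendecomposition)
    at each t, and pick the projection minimizing the disparity between
    the groups' reconstruction errors."""
    C_a = np.cov(X_a, rowvar=False)
    C_b = np.cov(X_b, rowvar=False)
    best, best_gap = None, np.inf
    for t in np.linspace(0.0, 1.0, grid):
        C = (1 - t) * C_a + t * C_b
        vals, vecs = np.linalg.eigh(C)               # closed-form PCA step
        W = vecs[:, np.argsort(vals)[::-1][:n_comp]]  # top components
        gap = abs(recon_error(X_a, W) - recon_error(X_b, W))
        if gap < best_gap:
            best, best_gap = W, gap
    return best, best_gap
```

Because every candidate comes from an eigendecomposition, the search stays as cheap as running classical PCA a fixed number of times, which is the kind of simplicity the abstract argues is needed for deployment.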
Tropical forests are home to a large share of the planet's plant and animal species, store billions of tons of carbon, and promote cloud and rain formation, which means they play a crucial role in the global ecosystem, besides being home to countless indigenous peoples. Unfortunately, millions of hectares of tropical forest are lost every year to deforestation or degradation. To mitigate this, in addition to public policies for the prevention and punishment of offenders, monitoring and deforestation-detection programs are employed. These programs typically use remote sensing images, image processing techniques, machine learning methods, and expert photo-interpretation to analyze, identify, and quantify possible changes in forest cover. Several projects have proposed different computational approaches, tools, and models to efficiently identify recently deforested areas, thereby improving deforestation monitoring programs in tropical forests. In this sense, this paper proposes the use of pattern classifiers based on neuroevolution (NEAT) for the task of detecting deforestation in tropical forests. Moreover, a novel framework named e-NEAT was created and achieved classification results above 90% in the target application, using an extremely reduced and limited training set for learning the classification models. These results represent a relative gain of 6.2% over the best baseline ensemble method compared in this paper.
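As a rough illustration of the evolve-evaluate-select loop underlying neuroevolution: full NEAT additionally evolves the network topology itself through structural mutations and speciation, which this fixed-topology sketch omits entirely; the classifier, fitness measure, and all names here are illustrative assumptions.

```python
import numpy as np

def evolve_classifier(X, y, pop=30, gens=40, sigma=0.3, seed=0):
    """Minimal fixed-topology neuroevolution sketch.  Each genome is the
    weight vector of a tiny linear classifier; fitness is training
    accuracy; the best half survives each generation and reproduces
    with Gaussian mutation."""
    rng = np.random.default_rng(seed)
    d = X.shape[1] + 1                        # weights + bias
    genomes = rng.normal(size=(pop, d))

    def accuracy(w):
        scores = X @ w[:-1] + w[-1]
        return float(np.mean((scores > 0) == (y == 1)))

    for _ in range(gens):
        fit = np.array([accuracy(w) for w in genomes])
        order = np.argsort(fit)[::-1]
        parents = genomes[order[: pop // 2]]           # selection
        children = parents + rng.normal(scale=sigma,   # mutation
                                        size=parents.shape)
        genomes = np.vstack([parents, children])

    fit = np.array([accuracy(w) for w in genomes])
    best = genomes[int(np.argmax(fit))]
    return best, float(fit.max())
```

The appeal of this family of methods for the small-training-set regime described above is that fitness is evaluated on the whole classifier at once, with no gradient computation, so very little labeled data is needed per evaluation.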